Step Acceleration Based Training Algorithm for Feedforward Neural Networks
Authors
Abstract
This paper presents a very fast step acceleration based training algorithm (SATA) for multilayer feedforward neural network training. The most outstanding virtue of this algorithm is that it does not need to calculate the gradient of the target function; in each iteration, computation concentrates only on the part of the network affected by the varied weight. The proposed algorithm is simple, flexible, and easy to implement, and converges quickly. Simulations comparing it with other methods, including conventional backpropagation (BP), the conjugate gradient (CG) method, and BP based on weight extrapolation (BPWE), confirm its superiority in convergence speed and required computation time.
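The abstract does not spell out the update rule, but a gradient-free, per-weight search with an accelerating step can be sketched as a Hooke-Jeeves-style coordinate search: each weight is perturbed by its own step; an improving move is accepted and the step is enlarged (accelerated), otherwise the step shrinks. The function names, constants, and the XOR task below are illustrative assumptions, not the paper's exact SATA procedure:

```python
import numpy as np

def forward(W1, b1, W2, b2, X):
    # One hidden layer with tanh activation, linear output.
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(params, X, y):
    W1, b1, W2, b2 = params
    return float(np.mean((forward(W1, b1, W2, b2, X) - y) ** 2))

def step_accel_train(X, y, hidden=4, sweeps=200,
                     step0=0.5, accel=1.2, shrink=0.5, seed=0):
    """Gradient-free coordinate search with per-weight step acceleration
    (illustrative sketch, not the paper's exact algorithm)."""
    rng = np.random.default_rng(seed)
    shapes = [(X.shape[1], hidden), (hidden,), (hidden, 1), (1,)]
    params = [rng.normal(scale=0.5, size=s) for s in shapes]
    steps = [np.full(s, step0) for s in shapes]        # one step per weight
    best = mse(params, X, y)
    for _ in range(sweeps):
        for p, s in zip(params, steps):
            for idx in np.ndindex(p.shape):
                old = p[idx]
                improved = False
                for sign in (1.0, -1.0):               # try +step, then -step
                    p[idx] = old + sign * s[idx]
                    # NB: a faithful implementation would recompute only the
                    # activations affected by this weight; the full forward
                    # pass here is for clarity.
                    err = mse(params, X, y)
                    if err < best:
                        best = err                     # accept the move
                        s[idx] *= accel                # accelerate the step
                        improved = True
                        break
                    p[idx] = old                       # reject the move
                if not improved:
                    s[idx] *= shrink                   # contract the step
    return params, best
```

On a toy XOR problem this loop drives the training error well below that of a constant predictor without ever forming a gradient, which is the property the abstract emphasizes.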
Similar Articles
Evolutionary Discovery of Learning Rules for Feedforward Neural Networks with Step Activation Function
Neural networks with step activation function can be very efficient ways of performing nonlinear mappings. However, no standard learning algorithm exists for training such networks. In this work we use Genetic Programming (GP) to discover supervised learning algorithms which can train neural networks with step activation function. Thanks to GP, a new learning algorithm has been ...
A new EM-based training algorithm for RBF networks
In this paper, we propose a new Expectation-Maximization (EM) algorithm which speeds up the training of feedforward networks with local activation functions such as the Radial Basis Function (RBF) network. In previously proposed approaches, at each E-step the residual is decomposed equally among the units or proportionally to the weights of the output layer. However, these approaches tend to sl...
Numerical solution of fuzzy linear Fredholm integro-differential equation by fuzzy neural network
In this paper, a novel hybrid method based on a learning algorithm of fuzzy neural networks and Newton-Cotes methods with positive coefficients for the solution of linear Fredholm integro-differential equations of the second kind with fuzzy initial value is presented. Here the neural network is considered as part of a large field called neural computing or soft computing. We propose a learning algorithm from ...
An Efficient Optimization Method for Extreme Learning Machine Using Artificial Bee Colony
Traditional learning algorithms based on gradient descent, such as back-propagation (BP) and its variant Levenberg-Marquardt (LM), have been widely used in the training of multilayer feedforward neural networks. Gradient-descent-based algorithms usually converge more slowly than required, since many iterative learning steps are needed by such algorithms, and...
Merging Echo State and Feedforward Neural Networks for Time Series Forecasting
Echo state neural networks, which are a special case of recurrent neural networks, are studied from the viewpoint of their learning ability, with the goal of achieving greater prediction ability. A standard training of these neural networks uses a pseudoinverse matrix for one-step learning of the weights from hidden to output neurons. Such learning was substituted by backpropagation of error learni...